# Multi-round Iterative Fine-tuning
Gemma 2 9B It SPPO Iter3
An 8.9-billion-parameter language model produced by the third iteration of Self-Play Preference Optimization (SPPO), starting from google/gemma-2-9b-it and fine-tuned on the UltraFeedback dataset.
Large Language Model
Transformers
English
UCLA-AGI
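As a practical note, a model like this is typically loaded through the Hugging Face `transformers` library. The sketch below shows the standard loading pattern; the repository id `UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3` is an assumption inferred from the developer name and model title on this page, so verify it on the Hub before use.

```python
# Minimal sketch of loading the model with Hugging Face transformers.
# The MODEL_ID is an assumption based on this listing; confirm on the Hub.
MODEL_ID = "UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3"

def load_model(model_id: str = MODEL_ID):
    # Imports are deferred so this module can be inspected without
    # transformers installed; loading downloads ~18 GB of weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # place layers across available GPUs/CPU
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello, how are you?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the model is instruction-tuned (the "It" suffix), chat-style prompts formatted with the tokenizer's chat template will generally give better results than raw text.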
© 2025 AIbase